TrustRadius

Google Cloud Pub/Sub

Overview

What is Google Cloud Pub/Sub?

Google offers Cloud Pub/Sub, managed message-oriented middleware supporting many-to-many asynchronous messaging between applications.

Recent Reviews

Reliable Pub/Sub Vendor

10 out of 10
October 26, 2020
Google Cloud Pub/Sub is used for processing scheduled tasks, sending notifications to clients and other background job execution. It helps …



Product Details

Google Cloud Pub/Sub Technical Details

Operating Systems: Unspecified
Mobile Application: No

Frequently Asked Questions

Google offers Cloud Pub/Sub, managed message-oriented middleware supporting many-to-many asynchronous messaging between applications.

Reviewers rate Support Rating highest, with a score of 9.8.

The most common users of Google Cloud Pub/Sub are from Mid-sized Companies (51-1,000 employees).


Reviews and Ratings (23)

Attribute Ratings

Reviews

(1-3 of 3)
Cézar Augusto Nascimento e Silva | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
We used Google Cloud Pub/Sub to solve ETL/streaming and real-time processing problems for high volumes of data. We used it to fill data lakes, to process and store data in warehouses or data marts, and to process events encoded as either JSON or protobuf.

We integrated it from many languages, such as Python, Java, Go, and Kotlin. We had configured a Kubernetes autoscaling system based on some Google Cloud Pub/Sub metrics, which worked very well. The main metrics we observed for alerts and as overall health indicators of our systems were the size of each queue and the age of the oldest message in the queue, indicating a high-volume jam or a persistent error on a single specific message, respectively.

We had to handle idempotency, since a duplicated message delivery is a possibility; this was usually paired with a Redis cache to guarantee idempotency within a reasonable time window.
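The idempotent-consumer pattern the reviewer describes can be sketched as follows. A real deployment would use Redis (e.g. `SET key NX EX ttl`); here an in-memory dict with expiry timestamps stands in so the sketch is self-contained. All names (`DedupWindow`, `handle`) are illustrative, not part of the Pub/Sub API.

```python
import time

class DedupWindow:
    """Stand-in for a Redis-backed dedup store with a TTL window."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.seen = {}  # message_id -> expiry timestamp

    def first_delivery(self, message_id):
        """Return True only the first time message_id is seen in the window."""
        now = time.time()
        # Drop expired entries so the window stays bounded.
        self.seen = {m: t for m, t in self.seen.items() if t > now}
        if message_id in self.seen:
            return False  # duplicate redelivery: skip processing
        self.seen[message_id] = now + self.ttl
        return True

def handle(window, message_id, payload, processed):
    # Pub/Sub is at-least-once, so the same message can arrive twice;
    # only act on the first delivery (the message is acked either way).
    if window.first_delivery(message_id):
        processed.append(payload)
```

A duplicate delivery of the same `message_id` inside the TTL window is then a no-op, which is what makes retries and redeliveries safe.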
  • Data Streaming
  • Event Sourcing
  • Protobuf message format
  • Scalability
  • Easy to Use
  • Observability
  • Integrated Dead Letter Queue (DLQ) functionality
  • Deliver Once (idempotency) - currently in preview
  • Vendor locked to Google
If you want to stream high volumes of data, be it for ETL streaming or event sourcing, Google Cloud Pub/Sub is your go-to tool. It's easy to learn, its metrics are easy to observe, and it scales without additional configuration: if you have more producers or consumers, all you need to do is deploy your solutions on k8s and autoscale your pods to match the data volume. The DLQ is also very transparent and easy to configure. Your code needs no logic whatsoever for orchestrating Pub/Sub; you just plug and play.

However, if you are not in the Google Cloud environment, you will most likely be unable to use it, since it's a Google Cloud product.
  • DLQ (Dead Letter Queues)
  • Scalability
  • Delivery backoff for failed messages
  • Scalable System
  • Better Alerts (observability)
  • Auto Scaling
Kafka behaves like an ordered queue: there is no delivery backoff, so if a message has a problem, the consumer doesn't advance to the next one. Google Cloud Pub/Sub is more like a SET of messages, while Kafka is like a LIST. In Kafka the same message will repeat immediately while it is being NACKed; Google Cloud Pub/Sub will instead deliver another message and apply a backoff to the previously NACKed one, never getting stuck on a single message forever. Dead letter queues are built into Google Cloud Pub/Sub, while in Kafka they are not a feature at all. You can configure the maximum number of NACKs before a message is sent to the DLQ, which is very powerful when combined with exponential backoff. Google Cloud Pub/Sub lets you scale consumers at will with no constraint; Kafka has a rather convoluted concept of partitions and consumer groups that requires you to plan ahead how many consumers you will attach to the queue, because those constraints prevent free scaling of consumers. If you plan for N consumers, you probably have to stick with that N forever or recreate your Kafka topic. Confluent Cloud somewhat solves this, but it looks like a solution to a problem that shouldn't exist in the first place.
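The redelivery semantics described above can be simulated in a few lines: a NACKed message is retried with capped exponential backoff and routed to a dead letter queue after a maximum number of delivery attempts, instead of blocking the queue the way an ordered log would. Function name and default numbers are made up for the sketch; in a real subscription they come from its retry policy and dead-letter policy.

```python
def redeliver(fails_n_times, max_delivery_attempts=5,
              min_backoff=1.0, max_backoff=60.0):
    """Simulate Pub/Sub-style redelivery for a message whose handler
    NACKs it `fails_n_times` times before succeeding.

    Returns (delivered, backoffs_applied, went_to_dlq)."""
    backoffs = []
    for attempt in range(1, max_delivery_attempts + 1):
        if attempt > fails_n_times:
            return True, backoffs, False  # handler finally ACKed
        # Exponential backoff between redeliveries, capped at max_backoff.
        backoffs.append(min(min_backoff * 2 ** (attempt - 1), max_backoff))
    return False, backoffs, True  # attempts exhausted: forward to the DLQ
```

A transient failure (two NACKs) is absorbed by the backoff schedule, while a permanently failing message ends up in the DLQ after five attempts rather than blocking its neighbors.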
Users: 100 (data and AI engineers)
Supporting staff: 10 (SRE, Site Reliability Engineers)
  • ETL data Streaming
  • Event Sourcing
It serves all of our purposes in the most transparent way I can imagine. After seeing other message queueing providers, I can only attest to its quality.
It's just right, everything is configurable and very easy to understand.
Always set a good exponential backoff to avoid processing the same failed message over and over. Consider using performant data formats like protobuf instead of plain JSON when dealing with predictable data structures.
No - we have not done any customization to the interface
No - the product does not support adding custom code
It has libraries in many languages; Google provides good guides, and their (possibly auto-generated) client libraries are easy to understand. It has very good observability too.
  • Dead Letter Queue (DLQ)
  • Exponential Backoff
  • Priority Queueing
  • Deliver Once
You can just plug in consumers at will and it will respond, there's no need for further configuration or introducing new concepts. You have a queue, if it's slow, you plug in more consumers to process more messages: simple as that.
I have never faced a single problem in 4 years.
It's very fast, and can be even faster if you use protobuf.
Score 9 out of 10
Vetted Review
Verified User
Incentivized
Fully managed messaging middleware, useful for service integration. We have mainly used it for large-scale logs from Kubernetes clusters. Pub/Sub proved to be easy to use, with a clear, user-friendly UI, a well-documented SDK, and built-in elasticity, and it was the foundation for a real-time data management system. Overall, very pleased with Google Cloud Pub/Sub.
  • Messaging
  • Scalable
  • No-ops
  • Secure
  • Only usable within GCP
Google Cloud Pub/Sub is ideal for any real-time data project, especially with large datasets in the terabytes per day range. We have used it successfully in a variety of use cases ranging from recommendations engines (where we’d need to ingest in real time people’s clickstream) to more ops related Kubernetes cluster provisioning and monitoring (based on Kubernetes control plane events).
  • Elasticity
  • Ease of use
  • Security
  • Positive ROI
Google Cloud Pub/Sub as a managed service is significantly easier to use than a self-managed Kafka cluster. As our software was already on GCP, it was a no-brainer to use Pub/Sub due to its high level of integration and ease of use with other Google Cloud Platform services.
Score 9 out of 10
Vetted Review
Verified User
Incentivized
Google [Cloud] Pub/Sub is used for high-volume, high-speed data streams from multiple data sources into a data platform. This was set up by the central Data and Analytics team to serve business use cases for most of the departments within our organization, such as Marketing, Product, Sales, and Editorial.

Business Problems addressed
  • Identify non-app users and their inactivity, and show such users a personalized promo on the website when they visit
  • Identify low-engagement users overall, ingest their past usage patterns, and cluster them into lookalike user segments to show them next-best actions
  • Capture users' behavior and interactions as "Topics" in Pub/Sub and show them the next-best action as appropriate
  • With a pub/sub architecture the consumer is decoupled in time from the publisher, i.e., if the consumer goes down, it can replay any events that occurred during its downtime
  • It also allows the consumer to throttle and batch incoming data, providing much-needed flexibility when working with multiple types of data sources
  • A simple and easy to use UI on cloud console for setup and debugging
  • It enables event-driven architectures and asynchronous parallel processing, while improving performance, reliability and scalability
  • It is limited to working within the same platform; to work with different datasets at the same time, you must request prior security authorization
  • It can sometimes lead to unexpected charges, as Pub/Sub will automatically keep on retrying messages continuously, even if failures are due to permanent code-level issues.
  • Message redelivery behaves differently for push than for pull with the Python-based client: push messages are delivered immediately, and if your service is busy with another task, the delivery either fails or the message goes back into the queue
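The consumer-side throttling and batching mentioned above can be sketched as follows: with pull delivery the subscriber decides how many messages to take per cycle, so a slow consumer naturally throttles the stream. The in-memory deque stands in for the Pub/Sub backlog so the sketch is self-contained; in the real Python client this cap corresponds to flow-control settings on the subscriber, and `pull_in_batches` is an illustrative name, not an API call.

```python
from collections import deque

def pull_in_batches(backlog, batch_size):
    """Drain a backlog in consumer-controlled batches.

    The consumer, not the publisher, sets batch_size, which is what
    decouples a fast producer from a slower subscriber."""
    batches = []
    while backlog:
        take = min(batch_size, len(backlog))
        batches.append([backlog.popleft() for _ in range(take)])
    return batches
```

Because the publisher never blocks on the consumer, the backlog simply grows until the subscriber catches up, batch by batch.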
Well Suited for
  • One-source, multiple-subscriber scenarios, where errors across multiple source datasets are not an issue and everyone subscribed to a topic receives the messages
  • Message loads are handled comfortably, and the platform's workspace, messaging functions, and property calls are easy to use
  • Reliable and Scalable model for creating and maintaining a big data pipeline
Less Appropriate for:
  • Ingesting multiple datasets from same data sources at the same time
  • Many Google Cloud Pub/Sub classes are concrete rather than interfaces, making them harder to mock when writing unit or integration tests
  • Ability to create Big Data infrastructure without bothering much about managing it: We create topics and subscriptions programmatically without having to set up any queues in advance. This makes deployments of new versions easier as well
  • Asynchronous communication is a huge advantage for reading messages and connecting to services and data systems inside and outside the platform, and it is easy to manage and integrate via APIs
  • It's easy to set up between apps locally as well as globally. My team can use it to send messages that trigger front-end notifications to our users, or to move large chunks of data around our global system for storage as well
  • Increased efficiency, with reliable Google-managed services up all the time and disaster recovery in place as well
  • Definitely lower costs, being a cloud-based solution, and easier to set up
  • Faster project delivery and go-to-market for the business use cases built on this technology at the back end
  • Easy to set up publishers, subscribers, and the message queue service
  • More reliable and easily scalable with Google-managed services
  • Easily integrated with most of the data sources we typically use for data storage and analysis
  • 10k topics is a good enough limit to build and deliver our business use cases
  • Asynchronous and fallback mechanisms are great for ensuring parallel delivery of messages
Amazon Redshift, Google Analytics 360 (formerly Google Analytics Premium), Adobe Analytics, Amazon Simple Notification Service (SNS), Tableau Desktop, Microsoft Power BI, QlikView